LSOS: Line-search second-order stochastic optimization methods for nonconvex finite sums

Authors

Abstract

We develop a line-search second-order algorithmic framework for minimizing finite sums. We do not make any convexity assumptions, but require the terms of the sum to be continuously differentiable and to have Lipschitz-continuous gradients. The methods fitting into this framework combine line searches and suitably decaying step lengths. A key issue is a two-step sampling at each iteration, which allows us to control the error present in the line-search procedure. Stationarity of limit points is proved in the almost-sure sense, while almost-sure convergence of the sequence of approximations to a solution holds under the additional hypothesis that the functions are strongly convex. Numerical experiments, including comparisons with state-of-the-art stochastic optimization methods, show the efficiency of our approach.
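The abstract does not include the algorithm itself, so the following Python sketch only illustrates the kind of iteration it describes: a regularized Newton-type direction built from one subsample and an Armijo backtracking test evaluated on a second, independent subsample, with a decaying cap on the step length. The function names (lsos_like_step, subsample_grad, subsample_val), sample sizes, damping constant, and the fallback to steepest descent are illustrative assumptions, not the authors' LSOS method.

```python
import numpy as np

# Minimal sketch (not the authors' LSOS code) of one iteration of a subsampled
# second-order method with an Armijo line search and a decaying step-length cap.

def subsample_grad(fi_grads, x, idx):
    """Average the gradients of the sampled terms f_i at x."""
    return np.mean([fi_grads[i](x) for i in idx], axis=0)

def subsample_val(fis, x, idx):
    """Average the sampled terms f_i at x."""
    return np.mean([fis[i](x) for i in idx], axis=0)

def lsos_like_step(fis, fi_grads, fi_hess, x, k, rng,
                   batch=32, c1=1e-4, beta=0.5, damping=1e-3):
    n = len(fis)
    # Two independent samples per iteration: one defines the model/direction,
    # the other controls the error in the line-search acceptance test.
    idx_dir = rng.choice(n, size=min(batch, n), replace=False)
    idx_ls = rng.choice(n, size=min(batch, n), replace=False)

    g = subsample_grad(fi_grads, x, idx_dir)
    H = np.mean([fi_hess[i](x) for i in idx_dir], axis=0)
    # Damped (regularized) Newton direction; no convexity is assumed, so the
    # subsampled Hessian is shifted to keep the linear system safely solvable.
    d = -np.linalg.solve(H + damping * np.eye(len(x)), g)
    if g @ d >= 0:          # fall back to steepest descent if not a descent direction
        d = -g

    # Armijo backtracking on the line-search sample, starting from a decaying cap.
    t = 1.0 / np.sqrt(k + 1)
    f0 = subsample_val(fis, x, idx_ls)
    while subsample_val(fis, x + t * d, idx_ls) > f0 + c1 * t * (g @ d):
        t *= beta
        if t < 1e-10:
            break
    return x + t * d
```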


Similar articles

Proximal Stochastic Methods for Nonsmooth Nonconvex Finite-Sum Optimization

We analyze stochastic algorithms for optimizing nonconvex, nonsmooth finite-sum problems, where the nonsmooth part is convex. Surprisingly, unlike the smooth case, our knowledge of this fundamental problem is very limited. For example, it is not known whether the proximal stochastic gradient method with constant minibatch converges to a stationary point. To tackle this issue, we develop fast st...
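As a companion to this excerpt, here is a hedged Python sketch of the proximal stochastic gradient iteration it refers to, with an l1 regularizer standing in for the convex nonsmooth part. The step size, minibatch size, and regularization weight are placeholder choices, not the settings analyzed in the cited paper.

```python
import numpy as np

# Illustrative prox-SGD sketch for F(x) = (1/n) * sum_i f_i(x) + lam * ||x||_1.

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def prox_sgd(fi_grads, x0, lam=0.1, step=0.01, batch=8, iters=1000, seed=0):
    rng = np.random.default_rng(seed)
    n, x = len(fi_grads), x0.copy()
    for _ in range(iters):
        idx = rng.choice(n, size=min(batch, n), replace=False)
        g = np.mean([fi_grads[i](x) for i in idx], axis=0)  # minibatch gradient of the smooth part
        x = soft_threshold(x - step * g, step * lam)         # gradient step followed by the prox
    return x
```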


Complexity Analysis of Second-order Line-search Algorithms for Smooth Nonconvex Optimization∗

There has been much recent interest in finding unconstrained local minima of smooth functions, due in part to the prevalence of such problems in machine learning and robust statistics. A particular focus is algorithms with good complexity guarantees. Second-order Newton-type methods that make use of regularization and trust regions have been analyzed from such a perspective. More recent proposa...


Parallel stochastic line search methods with feedback for minimizing finite sums

We consider unconstrained minimization of a finite sum of N continuously differentiable, not necessarily convex, cost functions. Several gradient-like (and, more generally, line-search) methods, where the full gradient (the sum of the N component costs' gradients) at each iteration k is replaced with an inexpensive approximation based on a sub-sample Nk of the component costs' gradients, are availabl...
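To make the idea in this excerpt concrete, here is a first-order Python sketch in which the full gradient is replaced by an average over a sub-sample Nk whose size grows with the iteration counter. The growth rule and Armijo parameters are illustrative assumptions, not the feedback scheme of the cited paper.

```python
import numpy as np

# Subsampled gradient method with Armijo backtracking; the sample size |N_k|
# grows with k (an assumed schedule, not the paper's feedback rule).

def subsampled_line_search(fis, fi_grads, x0, iters=50, c1=1e-4, beta=0.5, seed=0):
    rng = np.random.default_rng(seed)
    n, x = len(fis), x0.copy()
    for k in range(iters):
        nk = min(n, 4 * (k + 1))                       # assumed schedule for |N_k|
        idx = rng.choice(n, size=nk, replace=False)
        f = lambda y: np.mean([fis[i](y) for i in idx], axis=0)
        g = np.mean([fi_grads[i](x) for i in idx], axis=0)
        d, t = -g, 1.0
        while f(x + t * d) > f(x) + c1 * t * (g @ d):  # Armijo test on the subsample only
            t *= beta
            if t < 1e-12:
                break
        x = x + t * d
    return x
```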


Fast Stochastic Methods for Nonsmooth Nonconvex Optimization

We analyze stochastic algorithms for optimizing nonconvex, nonsmooth finite-sum problems, where the nonconvex part is smooth and the nonsmooth part is convex. Surprisingly, unlike the smooth case, our knowledge of this fundamental problem is very limited. For example, it is not known whether the proximal stochastic gradient method with constant minibatch converges to a stationary point. To tack...


Stochastic First- and Zeroth-order Methods for Nonconvex Stochastic Programming

In this paper, we introduce a new stochastic approximation (SA) type algorithm, namely the randomized stochastic gradient (RSG) method, for solving an important class of nonlinear (possibly nonconvex) stochastic programming (SP) problems. We establish the complexity of this method for computing an approximate stationary point of a nonlinear programming problem. We also show that this method pos...
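Since the excerpt only names the RSG method, the following Python sketch shows the basic idea of a randomized stochastic gradient scheme: run plain stochastic gradient steps and return an iterate selected at random rather than the last one. The constant step size and the uniform selection rule are simplifying assumptions, as is the user-supplied stoch_grad(x, rng) oracle.

```python
import numpy as np

# Randomized-stochastic-gradient-style sketch: SGD steps plus a randomly
# selected output iterate (selection rule here is uniform, an assumption).

def rsg(stoch_grad, x0, step=0.01, iters=1000, seed=0):
    rng = np.random.default_rng(seed)
    x, iterates = x0.copy(), []
    for _ in range(iters):
        x = x - step * stoch_grad(x, rng)   # one stochastic gradient step
        iterates.append(x.copy())
    # Return an iterate drawn at random instead of the last one.
    return iterates[rng.integers(len(iterates))]
```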



Journal

Journal title: Mathematics of Computation

Year: 2022

ISSN: 1088-6842, 0025-5718

DOI: https://doi.org/10.1090/mcom/3802